
Search for: All records

Creators/Authors contains: "Yang, Kang"


  1. Abstract

    The water vapor transport associated with latent heat flux (LE) in the planetary boundary layer (PBL) is critical for the atmospheric hydrological cycle, radiation balance, and cloud formation. The spatiotemporal variability of LE and the water vapor mixing ratio (rv) is poorly understood because atmospheric transport responds to land surface heterogeneity in scale‐dependent and nonlinear ways. Here, airborne in situ measurements combined with the wavelet technique are used to investigate scale‐dependent relationships among LE, vertical velocity (w) variance (σw²), and rv variance (σrv²) over a heterogeneous surface during the Chequamegon Heterogeneous Ecosystem Energy‐balance Study Enabled by a High‐density Extensive Array of Detectors 2019 (CHEESEHEAD19) field campaign. Our findings reveal distinct scale distributions of LE, σw², and σrv² at 100 m height, with majority scale ranges of 120 m–4 km for LE, 32 m–2 km for σw², and 200 m–8 km for σrv². The scales are classified into three ranges, the turbulent scale (8–200 m), the large‐eddy scale (200 m–2 km), and the mesoscale (2–8 km), to evaluate the scale‐resolved LE contributions associated with σw² and σrv². The large‐eddy scale in the PBL contributes over 70% of the monthly mean total LE, with equal (50%) contributions from σw² and σrv². The monthly temporal variations come mainly from the first two major contributing scale classes of LE, σw², and σrv². These results confirm the dominant role of the large‐eddy scale in the PBL in vertical moisture transport from the surface into the PBL, while the mesoscale contributes an additional ∼20%. This analysis complements published scale‐dependent LE variations, which lack detailed scale‐dependent vertical velocity and moisture information.

    Free, publicly-accessible full text available February 16, 2025
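The wavelet-based scale decomposition in this abstract can be illustrated with a simple stand-in. The sketch below uses an orthonormal Haar (multiresolution) transform rather than the wavelet the study actually employed, and the function name and sample spacing are illustrative assumptions; by Parseval's relation, the per-scale contributions plus the coarse residual sum exactly to the total w–rv covariance, i.e. the kinematic moisture flux that LE is proportional to.

```python
import numpy as np

def haar_cospectrum(w, q, dx):
    """Scale-resolved covariance (cospectrum) of w and q via an
    orthonormal Haar wavelet transform. Series length must be a
    power of two; dx is the along-track sample spacing in metres."""
    n = len(w)
    assert n == len(q) and n & (n - 1) == 0, "length must be a power of two"
    aw, aq = w - w.mean(), q - q.mean()          # work with fluctuations
    scales, contrib = [], []
    level = 0
    while len(aw) > 1:
        dw = (aw[0::2] - aw[1::2]) / np.sqrt(2.0)   # detail coefficients
        dq = (aq[0::2] - aq[1::2]) / np.sqrt(2.0)
        aw = (aw[0::2] + aw[1::2]) / np.sqrt(2.0)   # coarser approximations
        aq = (aq[0::2] + aq[1::2]) / np.sqrt(2.0)
        scales.append(dx * 2 ** (level + 1))        # eddy size at this level
        contrib.append(np.sum(dw * dq) / n)         # covariance at this scale
        level += 1
    residual = float(aw[0] * aq[0]) / n             # record-scale remainder
    return np.array(scales), np.array(contrib), residual
```

Summing `contrib` over the 8–200 m, 200 m–2 km, and 2–8 km bands then gives the turbulent-, large-eddy-, and mesoscale flux fractions analogous to those reported above.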
    This paper presents GeoDMA, which processes GPS data from multiple vehicles to detect anomalous driving maneuvers, such as rapid acceleration, sudden braking, and rapid swerving. First, an unsupervised deep auto-encoder is designed to learn a set of unique features from the normal historical GPS data of all drivers. We consider the temporal dependency of the driving data for individual drivers and the spatial correlation among different drivers. Second, to incorporate the peer dependency of drivers in local regions, we develop a geographical partitioning algorithm that partitions a city into several sub-regions for driving anomaly detection. Specifically, we extend the vehicle-vehicle dependency to road-road dependency and formulate the geographical partitioning task as an optimization problem whose objective is to maximize the dependency of roads within each sub-region and minimize the dependency of roads between any two different sub-regions. Finally, we train a specific driving anomaly detection model for each sub-region and update these models in situ by incremental training. We implement GeoDMA in PyTorch and evaluate its performance on a large set of real-world GPS trajectories. The experimental results demonstrate that GeoDMA achieves up to 8.5% higher detection accuracy than the baseline methods.
    Free, publicly-accessible full text available May 31, 2024
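The partitioning objective above (maximize road-road dependency within each sub-region, minimize it across sub-regions) is a graph-cut problem. As a minimal sketch, assuming the dependency matrix is symmetric and nonnegative, a spectral bipartition via the Fiedler vector of the graph Laplacian approximates this objective; the paper's actual algorithm may differ, and recursing on each half would yield more than two sub-regions.

```python
import numpy as np

def spectral_bipartition(W):
    """Split roads into two sub-regions from a symmetric road-road
    dependency matrix W: high dependency inside each part, low across
    the cut, using the sign of the Fiedler vector of the Laplacian."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                    # unnormalised graph Laplacian
    vals, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    fiedler = vecs[:, 1]                  # eigenvector of 2nd-smallest eigenvalue
    return fiedler >= 0                   # boolean sub-region labels
```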
  3.
    Zero-knowledge (ZK) proofs with an optimal memory footprint have attracted a lot of attention, because such protocols can easily prove very large computations with a small memory requirement. Such a ZK protocol needs only O(M) memory for both parties, where M is the memory required to verify the statement in the clear. In this paper, we propose several new ZK protocols in this setting that improve concrete efficiency and, at the same time, enable sublinear amortized communication for circuits with some notion of relaxed uniformity. 1. In the circuit-based model, where the computation is represented as a circuit over a field, our ZK protocol achieves a communication complexity of 1 field element per non-linear gate for any field size while keeping the computation very cheap. We implemented our protocol, which shows extremely high efficiency and affordability. Compared to the previous best-known implementation, we achieve a 6×–7× improvement in computation and a 3×–7× improvement in communication. When running on entry-level AWS instances, our protocol needs only one US dollar to prove one trillion AND gates (or 2.5 US dollars for one trillion multiplication gates over a 61-bit field). 2. In the setting where part of the computation can be represented as a set of polynomials, we can achieve communication sublinear in the polynomial size: the communication depends only on the input size and the highest degree among all polynomials, independent of the number of polynomials and the number of multiplications in them. Using the improved ZK protocol, we can prove matrix multiplication with communication proportional to the input size rather than the number of multiplications. Proving the multiplication of two 1024 × 1024 matrices, our implementation, with one thread and 1 GB of memory, needs only 10 seconds and communicates 25 MB, 35× faster than the state-of-the-art protocol Virgo, which would need more than 140 GB of memory for the same task.
    more » « less
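The matrix-multiplication communication figure can be sanity-checked with back-of-envelope arithmetic. Assuming ~61-bit field elements stored in 8 bytes each (an assumption; the protocol's exact encoding may differ), a per-multiplication-gate cost versus an input-size-proportional cost compare as follows:

```python
FIELD_BYTES = 8  # assumed size of one ~61-bit field element

n = 1024
mults = n ** 3                  # multiplications in a naive n x n matrix product
inputs_outputs = 3 * n * n      # two input matrices plus the product matrix

per_gate_comm = mults * FIELD_BYTES            # 1 element per multiplication gate
sublinear_comm = inputs_outputs * FIELD_BYTES  # input-size-proportional scheme

# per_gate_comm is 8 GiB; sublinear_comm is 25,165,824 bytes, i.e. ~25 MB
# in decimal units, matching the figure quoted in the abstract.
print(per_gate_comm / 2**30, "GiB vs", sublinear_comm / 2**20, "MiB")
```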
    This paper studies the effectiveness of joint compression and denoising strategies on realistic, long-term guided wave structural health monitoring data. We leverage the high correlation between nearby collections of guided waves in time to create sparse and low-rank representations. While compression and denoising schemes are not new, they are almost exclusively designed and studied on relatively simple datasets. In contrast, guided wave structural health monitoring datasets involve much more complex operational and environmental conditions, such as temperature, that distort data and for which the requirements for effective compression and denoising are not well understood. Based on seven million guided wave measurements collected over 2 years, the paper studies how to optimize data collection and algorithms to best utilize guided wave data for compression, denoising, and damage detection.

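A truncated SVD is one standard way to realize the low-rank idea sketched in this abstract: highly correlated waveforms stacked as rows form a nearly low-rank matrix, so keeping a few singular components both compresses and denoises. The data, rank, and function name below are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def lowrank_denoise(X, rank):
    """Truncated-SVD reconstruction: keep the top `rank` singular
    components of the measurement matrix X (one waveform per row)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(1)
clean = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 500))  # rank-3 "waveforms"
noisy = clean + 0.1 * rng.normal(size=clean.shape)
denoised = lowrank_denoise(noisy, rank=3)
# Storing U, s, Vt at rank 3 needs ~2,100 numbers vs 100,000 for X:
# roughly 48x compression while suppressing most of the added noise.
```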
  5. Abstract

    While guided wave structural health monitoring (SHM) is widely researched for ensuring safety, estimating performance deterioration, and detecting damage in structures, it suffers losses in accuracy due to varying environmental, sensor, and material factors. To combat these challenges, environmentally variable guided wave data is often stretched with temperature compensation methods, such as the scale transform and optimal signal stretch, to match a baseline signal and enable accurate damage detection. Yet, these methods fail for large environmental changes. This paper addresses this challenge by demonstrating a machine learning method to predict stretch factors. This is accomplished with feed-forward neural networks that approximate the complex velocity change function. We demonstrate that our machine learning approach outperforms the prior art on simulated Lamb wave data and is robust to extreme velocity variations. While our machine learning models do not conduct temperature compensation, their accurate stretch factor predictions serve as a proof of concept that a better model is plausible.

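The optimal-signal-stretch baseline this abstract refers to can be sketched as a grid search over candidate stretch factors; the implementation details here (resampling with `np.interp`, an MSE objective, and the function name) are assumptions for illustration.

```python
import numpy as np

def best_stretch(baseline, measured, factors):
    """Grid-search the stretch factor that best maps `measured` onto
    `baseline`: resample `measured` at t * factor and minimise the MSE.
    A learned model (as in the paper) would predict this factor instead."""
    t = np.arange(len(baseline))
    errs = [np.mean((np.interp(t * f, t, measured) - baseline) ** 2)
            for f in factors]
    return factors[int(np.argmin(errs))]
```

For small velocity changes the recovered factor matches the true one; for the large changes discussed above, the single-factor model itself breaks down, which is the gap the learned predictor targets.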
  6.
    Environmental effects are a significant challenge for guided wave structural health monitoring systems. These effects distort signals and increase the likelihood of false alarms. Many research papers have studied mitigation strategies for common variations in guided wave datasets reproducible in a lab, such as temperature and stress. There are fewer studies and strategies for detecting damage under more unpredictable outdoor conditions. This article proposes a long short-term principal component analysis reconstruction method to detect synthetic damage under highly variable environments involving precipitation, freeze, and other conditions. The method does not require temperature or other compensation methods and is tested on approximately seven million guided wave measurements collected over 2 years. Results show that our method achieves an area under the curve (AUC) score of nearly 0.95 when detecting synthetic damage under highly variable environmental conditions.